A Stochastic Second-Order Proximal Method for Distributed Optimization

Authors

Abstract

We propose a distributed stochastic second-order proximal (St-SoPro) method that enables agents in a network to cooperatively minimize the sum of their local loss functions without any centralized coordination. St-SoPro incorporates a decentralized approximation into an augmented Lagrangian function and randomly samples the local gradients and Hessian matrices at each update, so that it is efficient in solving large-scale problems. We show that for restricted strongly convex and smooth problems, the method linearly converges in expectation to a neighborhood of the optimum, and this neighborhood can be made arbitrarily small under proper parameter settings. Simulations over real machine learning datasets demonstrate that St-SoPro outperforms several state-of-the-art methods in terms of convergence speed as well as computation and communication costs.
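To make the flavor of the update concrete, here is a minimal single-agent Python sketch: sample a minibatch, form a stochastic gradient and Hessian, and take a proximally regularized Newton-type step. The logistic loss, problem sizes, and the parameters rho and batch are illustrative assumptions, not the paper's settings; the actual St-SoPro update also maintains dual variables and exchanges information with network neighbors, which this sketch omits.

    import numpy as np

    # Minimal single-agent sketch (not the paper's exact algorithm): each
    # iteration samples a minibatch, forms a stochastic gradient g and
    # Hessian H of a logistic loss, and takes the proximally regularized
    # Newton-type step  x <- x - (H + rho*I)^{-1} g,  i.e., the minimizer
    # of the sampled second-order model plus a proximal term. St-SoPro
    # itself also carries dual variables and neighbor communication.

    rng = np.random.default_rng(0)
    n, d = 1000, 10                           # illustrative problem sizes
    A = rng.standard_normal((n, d))
    y = np.sign(A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n))

    def batch_grad_hess(x, idx):
        """Stochastic gradient and Hessian of the logistic loss on a minibatch."""
        Ab, yb = A[idx], y[idx]
        s = 1.0 / (1.0 + np.exp(yb * (Ab @ x)))       # sigmoid(-y * a^T x)
        g = -(Ab * (yb * s)[:, None]).mean(axis=0)    # minibatch gradient
        H = (Ab * (s * (1.0 - s))[:, None]).T @ Ab / len(idx)  # minibatch Hessian
        return g, H

    x = np.zeros(d)
    rho, batch = 1.0, 64                      # assumed proximal weight, batch size
    for _ in range(200):
        idx = rng.choice(n, size=batch, replace=False)  # random sampling step
        g, H = batch_grad_hess(x, idx)
        x -= np.linalg.solve(H + rho * np.eye(d), g)    # regularized Newton step

The (H + rho*I) solve is what makes the step proximal: rho trades off trusting the sampled curvature against staying close to the current iterate, which also keeps the linear system well conditioned when the minibatch Hessian is rank deficient.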


Related articles

A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization

We analyze stochastic gradient algorithms for optimizing nonconvex, nonsmooth finite-sum problems. In particular, the objective function is given by the summation of a differentiable (possibly nonconvex) component, together with a possibly non-differentiable but convex component. We propose a proximal stochastic gradient algorithm based on variance reduction, called ProxSVRG+. The algorithm is ...
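For context, the sketch below shows a generic proximal SVRG-style iteration of the kind this abstract refers to: variance-reduced stochastic gradient steps on the smooth part, followed by the proximal operator of the convex nonsmooth part, here an assumed l1 penalty. The epoch schedule and constants are illustrative, not ProxSVRG+'s exact settings.

    import numpy as np

    # Generic proximal SVRG-style iteration (illustrative; ProxSVRG+ uses
    # its own minibatch/epoch schedule): minimize f(x) + h(x), where f is a
    # finite sum of smooth least-squares terms and h = lam * ||x||_1 is
    # handled through its proximal operator (soft-thresholding).

    rng = np.random.default_rng(1)
    n, d = 500, 20
    A = rng.standard_normal((n, d))
    b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)
    lam, step = 0.1, 1e-3                     # assumed penalty and step size

    grad_i = lambda x, i: A[i] * (A[i] @ x - b[i])     # per-sample gradient
    full_grad = lambda x: A.T @ (A @ x - b) / n
    prox_l1 = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    x = np.zeros(d)
    for epoch in range(20):
        x_ref = x.copy()                 # snapshot point for variance reduction
        mu = full_grad(x_ref)            # full gradient at the snapshot
        for _ in range(n):
            i = rng.integers(n)
            v = grad_i(x, i) - grad_i(x_ref, i) + mu   # variance-reduced gradient
            x = prox_l1(x - step * v, step * lam)      # proximal gradient step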


Second Order Stochastic Optimization in Linear Time

First-order stochastic methods are the state-of-the-art in large-scale machine learning optimization owing to efficient per-iteration complexity. Second-order methods, while able to provide faster convergence, have been much less explored due to the high cost of computing the second-order information. In this paper we develop second-order stochastic methods for optimization problems in machine ...


An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and Its Implications to Second-Order Methods

This paper presents an accelerated variant of the hybrid proximal extragradient (HPE) method for convex optimization, referred to as the accelerated HPE (A-HPE) framework. Iteration-complexity results are established for the A-HPE framework, as well as a special version of it, where a large stepsize condition is imposed. Two specific implementations of the A-HPE framework are described in the co...


L-SR1: A Second Order Optimization Method for Deep Learning

We describe L-SR1, a new second order method to train deep neural networks. Second order methods hold great promise for distributed training of deep networks. Unfortunately, they have not proven practical. Two significant barriers to their success are inappropriate handling of saddle points, and poor conditioning of the Hessian. L-SR1 is a practical second order method that addresses these conc...
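The SR1 (symmetric rank-one) update underlying the method's name can be sketched generically as below. This is plain SR1 with the standard skip rule, not the limited-memory L-SR1 implementation the paper describes; the quadratic test problem is an assumption for illustration.

    import numpy as np

    # Generic SR1 quasi-Newton update: given step s = x_new - x_old and
    # gradient change g = grad_new - grad_old, update the Hessian
    # approximation B so that B_new @ s == g (the secant equation).

    def sr1_update(B, s, g, tol=1e-8):
        r = g - B @ s                            # residual of the secant equation
        denom = r @ s
        if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(s):
            return B                             # standard skip rule for stability
        return B + np.outer(r, r) / denom        # symmetric rank-one correction

    # Quick check on a quadratic f(x) = 0.5 x^T A x, whose true Hessian is A:
    # repeated SR1 updates along random directions recover A.
    rng = np.random.default_rng(2)
    A = rng.standard_normal((5, 5)); A = A @ A.T + np.eye(5)
    B = np.eye(5)
    for _ in range(40):
        s = rng.standard_normal(5)
        B = sr1_update(B, s, A @ s)              # g = A s for a quadratic
    print(np.allclose(B, A, atol=1e-6))          # True: B converges to A

Unlike BFGS, the SR1 correction is not forced to stay positive definite, which is why it can model the indefinite curvature around saddle points that the abstract mentions.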


A second-order conic optimization-based method for visual servoing

This work presents a novel method for the visual servoing control problem based on second...



Journal

Journal title: IEEE Control Systems Letters

Year: 2023

ISSN: 2475-1456

DOI: https://doi.org/10.1109/lcsys.2023.3244740